

Search for: All records

Creators/Authors contains: "Pritchard, Michael"


  1. Projecting climate change is a generalization problem: We extrapolate the recent past using physical models across past, present, and future climates. Current climate models require representations of processes that occur at scales smaller than model grid size, which have been the main source of model projection uncertainty. Recent machine learning (ML) algorithms hold promise to improve such process representations but tend to extrapolate poorly to climate regimes that they were not trained on. To get the best of the physical and statistical worlds, we propose a framework, termed “climate-invariant” ML, incorporating knowledge of climate processes into ML algorithms, and show that it can maintain high offline accuracy across a wide range of climate conditions and configurations in three distinct atmospheric models. Our results suggest that explicitly incorporating physical knowledge into data-driven models of Earth system processes can improve their consistency, data efficiency, and generalizability across climate regimes.

     
    Free, publicly-accessible full text available February 7, 2025
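    As a sketch of what a climate-invariant input transformation can look like (an illustrative example, not necessarily the paper's exact recipe): specific humidity grows sharply with warming, while relative humidity, derived from it together with temperature and pressure, stays in a similar range across climates and therefore generalizes better as a network input.

```python
import numpy as np

def saturation_vapor_pressure(T):
    """Bolton (1980) approximation over liquid water (Pa); T in kelvin."""
    T_c = T - 273.15
    return 611.2 * np.exp(17.67 * T_c / (T_c + 243.5))

def specific_to_relative_humidity(q, T, p):
    """Map specific humidity q (kg/kg) at temperature T (K) and pressure
    p (Pa) to relative humidity (0-1). Unlike q, whose distribution shifts
    strongly under warming, relative humidity occupies a similar range
    across climates, making it a more climate-invariant network input."""
    eps = 0.622  # ratio of gas constants R_d / R_v
    e = q * p / (eps + (1.0 - eps) * q)  # vapor pressure recovered from q
    return e / saturation_vapor_pressure(T)
```

    At fixed specific humidity, a warmer column has a lower relative humidity, which is exactly the climate dependence the transformation absorbs before the data reach the network.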
  2. Abstract

    High‐Resolution Multi‐scale Modeling Frameworks (HR)—global climate models that embed separate, convection‐resolving models with resolution high enough to resolve boundary layer eddies—have exciting potential for investigating low cloud feedback dynamics, owing to reduced parameterization and their capacity for multidecadal throughput on modern computing hardware. However, low clouds in past HRs have suffered from a stubborn over‐entrainment problem: an uncontrolled source of mixing across the marine subtropical inversion that manifests as stratocumulus dim biases in the present‐day climate, limiting their scientific utility. We report new results showing that this over‐entrainment can be partly offset by using hyperviscosity and cloud droplet sedimentation. Hyperviscosity damps small‐scale momentum fluctuations associated with the formulation of the momentum solver of the embedded large eddy simulation. Adding the sedimentation process alongside the default one‐moment microphysics in the HR removes condensed‐phase particles from the entrainment zone, which further reduces entrainment efficiency. The result is an HR that produces more low clouds with a higher liquid water path and a reduced stratocumulus dim bias; associated improvements in the explicitly simulated sub‐cloud eddy spectrum are observed. We report these sensitivities in multi‐week tests and then explore their operational potential alongside microphysical retuning in decadal simulations at an operational 1.5° exterior resolution. The result is a new HR with the desired improvements in the baseline present‐day low cloud climatology, and a reduced global mean bias and root‐mean‐squared error of absorbed shortwave radiation. We suggest it should be promising for examining low cloud feedbacks with minimal approximation.
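    The hyperviscosity idea, damping grid-scale momentum fluctuations much more strongly than resolved eddies, can be illustrated with a minimal 1-D spectral sketch (a toy stand-in, not the embedded LES solver's actual formulation):

```python
import numpy as np

def hyperviscosity_step(u, dt, nu4, L=1.0):
    """Advance du/dt = -nu4 * d4u/dx4 by one step using an exact spectral
    integrating factor. The damping scales as k**4, so grid-scale momentum
    noise is removed almost entirely while well-resolved eddies are nearly
    untouched."""
    n = u.size
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=L / n)  # angular wavenumbers
    u_hat = np.fft.rfft(u) * np.exp(-nu4 * k**4 * dt)
    return np.fft.irfft(u_hat, n=n)
```

    With nu4 chosen so that a wavenumber-20 noise component is damped by roughly e^-5 per step, a wavenumber-1 flow loses well under 0.01% of its amplitude in the same step, which is the selectivity that makes hyperviscosity attractive here.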

     
  3. Abstract

    Climate models are essential to understand and project climate change, yet long‐standing biases and uncertainties in their projections remain. This is largely associated with the representation of subgrid‐scale processes, particularly clouds and convection. Deep learning can learn these subgrid‐scale processes from computationally expensive storm‐resolving models while retaining many of their features at a fraction of the computational cost. Yet climate simulations with embedded neural network parameterizations remain challenging and depend strongly on the deep learning solution. This is likely associated with spurious, non‐physical correlations learned by the neural networks due to the complexity of the physical dynamical system. Here, we show that combining causality with deep learning helps remove spurious correlations and optimize the neural network algorithm. Specifically, we apply a causal discovery method to unveil causal drivers in the set of input predictors of atmospheric subgrid‐scale processes of a superparameterized climate model in which deep convection is explicitly resolved. The resulting causally‐informed neural networks are coupled to the climate model, replacing the superparameterization and radiation scheme. We show that climate simulations with causally‐informed neural network parameterizations retain many convection‐related properties and accurately generate the climate of the original high‐resolution climate model, while retaining generalization capabilities to unseen climates similar to those of the non‐causal approach. The combination of causal discovery and deep learning is a new and promising approach that leads to stable and more trustworthy climate simulations and paves the way toward more physically‐based causal deep learning approaches in other scientific disciplines as well.
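    A minimal sketch of the causal-screening idea: a greedy partial-correlation filter standing in for the paper's actual causal discovery method (the function and threshold below are illustrative assumptions, not the published algorithm).

```python
import numpy as np

def select_causal_drivers(X, y, threshold=0.2):
    """Greedy partial-correlation screen in the spirit of PC-style causal
    discovery: repeatedly keep the candidate input most strongly associated
    with the target after regressing out drivers already selected, and stop
    when no remaining association exceeds `threshold`. Inputs whose link to
    y is explained away by selected drivers are discarded as spurious."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    while remaining:
        # regress target and candidates on the drivers selected so far
        if selected:
            Z = np.column_stack([X[:, selected], np.ones(n)])
        else:
            Z = np.ones((n, 1))
        ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        best_j, best_r = None, 0.0
        for j in remaining:
            rx = X[:, j] - Z @ np.linalg.lstsq(Z, X[:, j], rcond=None)[0]
            denom = np.linalg.norm(rx) * np.linalg.norm(ry)
            r = abs(rx @ ry) / denom if denom > 0 else 0.0
            if r > best_r:
                best_j, best_r = j, r
        if best_j is None or best_r < threshold:
            break
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

    On synthetic data where one input merely mirrors a true driver, the mirror's partial correlation collapses once the true driver is selected, so it is excluded from the network's input set.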

     
  4. Abstract

    For the Community Atmosphere Model version 6 (CAM6), an adjustment is needed to conserve dry air mass. This adjustment exposes an inconsistency in how CAM6's energy budget incorporates water: in CAM6, water in the vapor phase has energy, but condensed phases of water do not. When water vapor condenses, only its latent energy is retained in the model, while its remaining internal, potential, and kinetic energy are lost. A global fixer is used in the default CAM6 model to maintain global energy conservation, but locally the energy tendency associated with water changing phase violates the divergence theorem. This error in energy tendency is intrinsically tied to the water vapor tendency and reaches its highest values in regions of heavy rainfall, where the annually averaged error can be as high as 40 W m−2. Several possible changes are outlined within this manuscript that would allow CAM6 to satisfy the divergence theorem locally. These fall into one of two categories: 1) modifying the surface flux to balance the local atmospheric energy tendency, and 2) modifying the local atmospheric tendency to balance the surface plus top-of-atmosphere energy fluxes. To gauge which aspects of the simulated climate are most sensitive to this error, the simplest possible change, in which condensed water still does not carry energy and a local energy fixer is used in place of the global one, is implemented within CAM6. Comparing this experiment with the default configuration of CAM6 reveals precipitation, particularly its variability, to be highly sensitive to the energy budget formulation.

    Significance Statement

    This study examines and explains spurious regional sources and sinks of energy in a widely used climate model. These energy errors result from not tracking the energy associated with water after it transitions from the vapor phase to either liquid or ice. Instead, the model uses a global fixer to offset the energy tendency related to the sources and sinks associated with condensed water species. We replace this global fixer with a local one to examine the model's sensitivity to the regional energy error and find a large sensitivity in the simulated hydrologic cycle. This work suggests that the underlying thermodynamic assumptions in the model should be revisited to build confidence in the model-simulated regional-scale water and energy cycles.
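    The difference between the global and local fixer can be illustrated with a toy column model (hypothetical numbers, not CAM6 code): a global fixer zeroes the area-mean error but leaves large leftovers exactly where rainfall is heavy, while a local fixer cancels the error column by column.

```python
import numpy as np

def leftover_after_global_fixer(residual, w):
    """Global-fixer approach: subtract the area-weighted global mean of the
    energy-tendency error uniformly. The global budget closes, but the error
    is only smeared in space, not removed where it occurs."""
    return residual - np.average(residual, weights=w)

def leftover_after_local_fixer(residual):
    """Local-fixer approach: cancel the error column by column, so the
    energy tendency satisfies the divergence theorem locally as well as
    globally."""
    return np.zeros_like(residual)

# Hypothetical per-column energy-tendency errors (W m-2), largest in the
# heavy-rainfall columns, as the abstract describes.
w = np.full(8, 1.0 / 8)
residual = np.array([40.0, 35.0, 5.0, 0.0, 0.0, 0.0, -2.0, 2.0])
```

    After the global fixer the mean error vanishes, yet individual columns still carry tens of W m−2 of spurious tendency; the local fixer removes it everywhere, at the cost of a locally modified heating that the model's climate then responds to.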
  7.
    Abstract Neural networks are a promising technique for parameterizing subgrid-scale physics (e.g., moist atmospheric convection) in coarse-resolution climate models, but their lack of interpretability and reliability prevents widespread adoption. For instance, it is not fully understood why neural network parameterizations often cause dramatic instability when coupled to atmospheric fluid dynamics. This paper introduces tools for interpreting their behavior that are customized to the parameterization task. First, we assess the nonlinear sensitivity of a neural network to lower-tropospheric stability and midtropospheric moisture, two widely studied controls of moist convection. Second, we couple the linearized response functions of these neural networks to simplified gravity wave dynamics and analytically diagnose the corresponding phase speeds, growth rates, wavelengths, and spatial structures. To demonstrate their versatility, these techniques are tested on two sets of neural networks, one trained with a superparameterized version of the Community Atmosphere Model (SPCAM) and the second with a near-global cloud-resolving model (GCRM). Even though the SPCAM simulation has a warmer climate than the cloud-resolving model, both neural networks predict stronger heating/drying in moist and unstable environments, which is consistent with observations. Moreover, the spectral analysis can predict that instability occurs when GCMs are coupled to networks that support unstable gravity waves with phase speeds larger than 5 m s−1. In contrast, standing unstable modes do not cause catastrophic instability. Using these tools, differences between the SPCAM-trained and GCRM-trained neural networks are analyzed, and strategies to incrementally improve the coupled online performance of both are unveiled.
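    A toy version of the spectral diagnosis: couple a parameterization's linearized response matrix to a single gravity wave mode and read growth rates and phase speeds off the eigenvalues. The matrix and wave setup here are illustrative assumptions, far simpler than the paper's analysis.

```python
import numpy as np

def coupled_modes(J, k, c_g=50.0):
    """Toy spectral stability analysis: couple a parameterization's
    linearized response matrix J (units 1/s) to one gravity wave mode of
    intrinsic phase speed c_g (m/s) at wavenumber k (1/m). Each eigenvalue
    lambda of the coupled operator yields a growth rate Re(lambda) and a
    phase speed -Im(lambda)/k."""
    A = -1j * k * c_g * np.eye(J.shape[0]) + J
    lam = np.linalg.eigvals(A)
    return lam.real, -lam.imag / k
```

    In this toy setting, the abstract's instability criterion reads: trouble is expected only for modes that are both growing (positive real part) and propagating faster than about 5 m s−1, while standing unstable modes are benign.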
  9. Abstract

    We design a new strategy to load‐balance high‐intensity sub‐grid atmospheric physics calculations restricted to a small fraction of a global climate simulation's domain. We show why the current parallel load‐balancing infrastructure of the Community Earth System Model (CESM) and the Energy Exascale Earth System Model (E3SM) cannot efficiently handle this scenario at large core counts. As an example, we study an unusual configuration of the E3SM Multiscale Modeling Framework (MMF) that embeds a binary mixture of two separate cloud‐resolving model grid structures, which is attractive for low cloud feedback studies. Less than a third of the planet uses high resolution (MMF‐HR; sub‐km horizontal grid spacing), relative to standard low‐resolution (MMF‐LR) cloud superparameterization elsewhere. To enable MMF runs with Multi‐Domain cloud‐resolving models (CRMs), our load‐balancing theory predicts the most efficient computational scale as a function of the high‐intensity work's relative overhead and its fractional coverage. The scheme successfully maximizes model throughput and minimizes model cost relative to the precursor infrastructure, effectively by devoting the vast majority of the processor pool to the few high‐intensity (and rate‐limiting) high‐resolution (HR) grid columns. Two examples prove the concept, showing that minor artifacts can be introduced near the HR/low‐resolution CRM grid transition boundary on idealized aquaplanets but are minimal in operationally relevant real‐geography settings. As intended, within the high‐ (low‐) resolution area, our Multi‐Domain CRM simulations exhibit cloud fraction and shortwave reflection convergent to standard baseline tests that use globally homogeneous MMF‐LR and MMF‐HR. We suggest this approach can open up a range of creative multi‐resolution climate experiments without requiring unduly large allocations of computational resources.
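    The load-balancing logic can be sketched as follows (a simplified reconstruction under stated assumptions, not E3SM's actual scheduler): if a fraction f of columns is high-resolution and each costs r times a low-resolution column, equalizing the wall-clock time of the two processor pools fixes the split.

```python
def optimal_hr_share(f, r):
    """Fraction of the processor pool to devote to high-resolution (HR)
    CRM columns. Illustrative assumptions: a fraction f of columns is HR,
    each costing r times a low-resolution (LR) column, and both pools
    scale perfectly. Equalizing the pools' wall-clock times,
    f*r / p_hr == (1 - f) / p_lr, yields the share below; for r >> 1
    nearly the whole pool serves the few rate-limiting HR columns."""
    hr_work = f * r
    lr_work = 1.0 - f
    return hr_work / (hr_work + lr_work)
```

    With, say, f = 1/3 and r = 100 (hypothetical numbers), roughly 98% of the pool goes to the HR columns, consistent with the abstract's point that the vast majority of processors should operate on the rate-limiting high-resolution work.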

     